
    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the translational motion to be controlled independently of the rotational motion. Finally, the proposed approach is validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
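
    For reference, a minimal sketch of the unified projection model referred to above, in the standard notation of that model (the symbols below are assumed here, not quoted from the paper): a 3D point \mathbf{X} is first projected onto the unit sphere and then perspectively projected from a point located at a distance \xi (the mirror parameter) from the sphere centre,

        \mathbf{x}_s = \frac{\mathbf{X}}{\|\mathbf{X}\|}, \qquad \mathbf{m} = \left( \frac{x_s}{z_s + \xi}, \; \frac{y_s}{z_s + \xi} \right),

    with \xi = 0 recovering the classical perspective camera, which is consistent with the experiments covering both a perspective and a fisheye camera.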

    Visual servoing of mobile robots using non-central catadioptric cameras

    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera usually consists of a conventional camera combined with a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially relied on central cameras and the corresponding unified projection model; more general models have so far been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, and in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, which validates the applicability, effectiveness, and robustness of the proposed method for image-based control of a non-holonomic robot.
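
    The radial-model features themselves are specific to the paper and are not reproduced here, but the servoing loop they feed into typically has the classical image-based form v = -lambda * L^+ (s - s*). A minimal numerical sketch of that generic law, with placeholder feature and interaction-matrix values (illustrative assumptions, not the paper's actual quantities):

        import numpy as np

        def ibvs_velocity(s, s_des, L_hat, gain=0.5):
            """Classical IBVS law: v = -gain * pinv(L_hat) @ (s - s_des).

            s, s_des : current and desired feature vectors, shape (k,)
            L_hat    : estimate of the interaction matrix, shape (k, n),
                       mapping the n controlled velocities to feature rates.
            Returns the commanded velocity, shape (n,).
            """
            error = s - s_des
            return -gain * np.linalg.pinv(L_hat) @ error

        # Toy usage with made-up numbers (purely illustrative)
        s = np.array([0.10, -0.05, 0.20])
        s_des = np.zeros(3)
        L_hat = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [0.5, -0.5]])  # 3 features, 2 controlled DOFs
        print(ibvs_velocity(s, s_des, L_hat))

    For a mobile robot moving on a plane, the controlled velocity reduces to the planar degrees of freedom (e.g. forward and angular velocity), which is what the two-column interaction matrix above is meant to suggest.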

    Active Estimation of 3D Lines in Spherical Coordinates

    Straight lines are common features in human-made environments, which makes them a frequently explored feature for control applications. Many control schemes, such as visual servoing, require the 3D parameters of the features to be estimated. In order to obtain the 3D structure of lines, a nonlinear observer is proposed. However, to guarantee convergence, the dynamical system must be coupled with an algebraic equation. This is achieved by using spherical coordinates to represent the line's moment vector, together with a change of basis, which allows the algebraic constraint to be introduced directly into the system's dynamics. Finally, a control law that attempts to optimize the convergence behavior of the observer is presented. The approach is validated in simulation and with a real robotic platform with an onboard camera. Comment: accepted at the 2019 American Control Conference (ACC), final version.
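
    As background for the moment-vector parameterization mentioned above (a standard line representation, assumed here rather than taken verbatim from the paper): a 3D line with unit direction d passing through a point P has moment vector m = P x d, and the spherical coordinates in question are the two angles describing the direction of a unit vector such as m / ||m||. A small illustrative sketch:

        import numpy as np

        def line_moment(P, d):
            """Moment vector of the line through point P with direction d."""
            d = d / np.linalg.norm(d)
            return np.cross(P, d)

        def unit_vector_to_spherical(u):
            """Azimuth/elevation angles (theta, phi) of a unit vector u."""
            u = u / np.linalg.norm(u)
            theta = np.arctan2(u[1], u[0])
            phi = np.arcsin(np.clip(u[2], -1.0, 1.0))
            return theta, phi

        P = np.array([1.0, 0.5, 2.0])   # a point on the line (illustrative values)
        d = np.array([0.0, 1.0, 0.0])   # line direction (illustrative values)
        m = line_moment(P, d)
        print(unit_vector_to_spherical(m))

    With d of unit norm, ||m|| is the distance from the camera centre to the line, which is the depth-like quantity a structure observer must recover.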

    Rotation Free Active Vision

    Incremental Structure from Motion (SfM) algorithms generally require precise knowledge of the camera linear and angular velocities in the camera frame in order to estimate the 3D structure of the scene. Since an accurate measurement of the camera's own motion can be a non-trivial task in several robotics applications (for instance when the camera is onboard a UAV), we propose in this paper an active SfM scheme that is fully independent of the camera angular velocity. This is achieved by considering, as visual features, rotational invariants obtained from the projection of the perceived 3D points onto a virtual unit sphere (unified camera model). This feature set is then exploited to design a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity so as to improve the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulation and experimental results illustrating the approach.
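
    The property that makes such rotation-free estimation possible can be sketched with standard camera kinematics (notation assumed here, not quoted from the paper): for a point \mathbf{P} expressed in the camera frame, with camera linear velocity \mathbf{v} and angular velocity \boldsymbol{\omega},

        \dot{\mathbf{P}} = -\mathbf{v} - \boldsymbol{\omega} \times \mathbf{P}, \qquad r = \|\mathbf{P}\|, \quad \mathbf{p} = \mathbf{P}/r, \qquad \dot{r} = \frac{\mathbf{P}^\top \dot{\mathbf{P}}}{r} = -\mathbf{p}^\top \mathbf{v},

    since \boldsymbol{\omega} \times \mathbf{P} is orthogonal to \mathbf{P}. The range dynamics along each projection ray thus depend only on the linear velocity, which is the kind of structure that allows the estimation, and the active choice of the direction of v, to be made independent of the angular velocity.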

    Visual servoing from three points using a spherical projection model

    This paper deals with visual servoing from three points. Using the geometric properties of the spherical projection of points, a new decoupled set of six visual features is proposed. The main originality lies in the use of the distances between the spherical projections of the points to define three features that are invariant to camera rotations. The other three features are linearly related to camera rotations. In comparison with the classical perspective coordinates of points, the new decoupled set does not introduce additional singularities. In addition, using the new set in its non-singular domain, a classical control law is proven to be ideal for rotational motions. These theoretical results, as well as the robustness to errors of the new decoupled control scheme, are illustrated through simulation results.
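
    A small numerical check of the invariance property described above (the feature definition follows the abstract; the code and values are purely illustrative, not the authors' implementation): rotating the camera about its optical centre rotates the spherical projections rigidly, so the pairwise distances between them are unchanged.

        import numpy as np

        def spherical_projection(P):
            return P / np.linalg.norm(P)

        def pairwise_sphere_distances(points):
            """Distances between spherical projections of 3D points."""
            s = [spherical_projection(P) for P in points]
            n = len(s)
            return [np.linalg.norm(s[i] - s[j])
                    for i in range(n) for j in range(i + 1, n)]

        # Three arbitrary 3D points in the camera frame (illustrative values)
        pts = [np.array([0.3, 0.1, 1.5]),
               np.array([-0.2, 0.4, 2.0]),
               np.array([0.0, -0.3, 1.2])]

        # A rotation about the camera centre leaves the distances unchanged
        theta = 0.7
        R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
        print(pairwise_sphere_distances(pts))
        print(pairwise_sphere_distances([R @ P for P in pts]))  # same values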

    Robust image-based visual servoing using invariant visual information

    This paper deals with the use of invariant visual features for visual servoing. New features are proposed to control the 6 degrees of freedom of a robotic system, with better linearizing properties and robustness to noise than the state of the art in image-based visual servoing. We show that using these features significantly improves the behavior of image-based visual servoing in task space. Several experimental results are provided that validate our proposal.

    Decoupled image-based visual servoing for cameras obeying the unified projection model

    This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model. Invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants that decrease the sensitivity of the interaction matrix to variations in object depth. Finally, the proposed approach is validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
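
    The decoupling argument common to this line of work can be written compactly (generic interaction-matrix notation, assumed here rather than quoted from the paper): if a feature vector \mathbf{s} has dynamics \dot{\mathbf{s}} = \mathbf{L}_v \mathbf{v} + \mathbf{L}_\omega \boldsymbol{\omega} and is invariant to rotation, then \mathbf{L}_\omega = \mathbf{0}, so

        \dot{\mathbf{s}}_{\mathrm{inv}} = \mathbf{L}_v \, \mathbf{v},

    and the translational velocity can be servoed from the invariant features alone, while the remaining features handle the rotational degrees of freedom.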